
Biased AI

‘As We Code, So We Reap’

Debanjan Borthakur

Human prejudice stretches back millennia, and the seeds of racism and bias that people sowed long ago have now taken root and flourished within artificial intelligence. Bias existed long before machine learning algorithms emerged; whenever society invents a new technology, it inherits the prejudices and discrimination of earlier eras. In the 1930s, redlining maps dictated who could receive loans, systematically denying Black Americans access to mortgages, insurance, and other essential financial services. Today’s credit-scoring algorithms still mirror those same exclusions. As AI extends into recruitment, administration, medicine, and the media, alarm bells are sounding: if people do not imbue their machines with ethical values, those machines will merely magnify humanity’s deepest biases.

Just a few days ago, this writer encountered an article generated by AI–yet its prose unmistakably reflected human prejudice. While biases introduced via “prompt framing” are easy to detect, the subtler distortions in AI run far deeper, rooted in history. Jim Crow laws once codified racial segregation across the American South; decades later, the Home Owners’ Loan Corporation produced “redlining maps” that labelled predominantly Black neighbourhoods as “hazardous,” denying residents access to loans to buy or improve homes. Those bureaucratic red lines manufactured inequality–and their legacy persists in today’s data. Similar forms of institutional discrimination have appeared around the world: Canada’s “Chinese Head Tax” targeted Chinese immigrants; during World War II, the United States forcibly interned Japanese Americans; and for over forty years, under the guise of “scientific research,” the Tuskegee syphilis study denied Black men treatment. These racist policies were enshrined in law and practice, normalising prejudice.

Why do these historical injustices matter now? Modern AI relies on vast repositories of documents–laws, court records, medical files, employment histories–that often carry those same discriminatory patterns. When people train AI models on data imbued with old institutional inequalities, and fail to correct for them, they risk recreating those injustices at digital scale and speed.
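To see the mechanism concretely, consider a deliberately simplified sketch with synthetic data (an illustration only, not drawn from any real lending system): a model is trained on past loan decisions that penalised one group, with group membership withheld from the model entirely, yet it relearns the old gap through a proxy feature such as neighbourhood.

```python
# Illustrative sketch only: all data below are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                   # 0 = group A, 1 = group B (synthetic)
income = rng.normal(50, 10, n)                  # identically distributed in both groups
neighbourhood = group + rng.normal(0, 0.3, n)   # proxy feature shaped by past segregation

# "Historical" approvals: driven by income, but with a penalty applied to group B
approved = (income - 8 * group + rng.normal(0, 5, n)) > 48

X = np.column_stack([income, neighbourhood])    # group itself is never given as a feature
model = LogisticRegression(max_iter=1000).fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
# The gap between the groups persists: the proxy lets the model relearn the old bias.
```

Dropping the sensitive attribute from the inputs, in other words, is no guarantee of fairness when the rest of the data already encodes the discrimination.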

There are alarming digital echoes of this history. In 2015, Google Photos infamously labelled images of dark-skinned individuals as “gorillas,” reviving dehumanising comparisons once levelled against Black people. COMPAS, a software tool used to predict recidivism, rated Black defendants as significantly higher risk than white defendants–a reflection of biased “stop-and-frisk” policing data. Amazon’s automated résumé filter downgraded applications containing the word “women’s,” revealing entrenched gender bias. In Detroit, a facial-recognition error led to the wrongful arrest of Robert Williams, an innocent Black man. Each example underscores that AI systems mirror the biases in their training data. If history is skewed, AI will be, too.

So why isn’t AI neutral? One major culprit is biased training data. As Brian Christian describes in “The Alignment Problem”, facial-recognition datasets underrepresented darker-skinned faces, making the models less accurate for those groups. An AI system’s sole objective is to maximise its performance metrics–its “points”–without regard for human values. Compounding this, deep neural networks contain millions of hidden parameters, rendering their decision-making processes largely opaque–even to their creators. This disconnect between AI’s optimisation targets and human ethical standards is known as the alignment problem: AI will pursue its programmed goals relentlessly, even if they conflict with human values, because it lacks an emotional or moral compass. Nick Bostrom’s “paperclip maximizer” thought experiment dramatises this risk: an AI instructed solely to produce paperclips might convert the entire planet into a factory to achieve its quota. Though hypothetical, it vividly illustrates the stakes.
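The alignment gap can be made concrete with a toy example (again synthetic and purely illustrative): two classifiers make exactly the same number of mistakes, but one concentrates every mistake on a single group; an accuracy-only objective scores them identically, because nothing in the metric records who bears the harm.

```python
# Illustrative sketch only: synthetic labels and a hypothetical protected attribute.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
y_true = rng.integers(0, 2, n)     # synthetic ground-truth outcomes
group = rng.integers(0, 2, n)      # a protected attribute the objective never sees

def objective(y_pred):
    """Everything the optimiser 'sees': raw accuracy, with no term for who bears the errors."""
    return (y_pred == y_true).mean()

errors = 100                                  # both classifiers make exactly 100 mistakes
even_pred = y_true.copy()
idx = rng.choice(n, errors, replace=False)    # mistakes scattered across everyone
even_pred[idx] = 1 - even_pred[idx]

skewed_pred = y_true.copy()
idx = np.flatnonzero(group == 1)[:errors]     # every mistake lands on one group
skewed_pred[idx] = 1 - skewed_pred[idx]

print(objective(even_pred), objective(skewed_pred))   # identical scores: 0.9 and 0.9
```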

To guide AI toward fairness and equity, experts have proposed the RICE framework: Robustness, Interpretability, Controllability, and Ethicality. Under RICE, AI systems must operate reliably across diverse cultural and national contexts; users should understand the rationale behind AI decisions; humans must maintain ultimate control; and AI must embody moral values such as justice and equality. But can one fix bias simply by refining data and algorithms? Unless people address the broader societal inequalities, discrimination, and irregularities that underlie data, AI will continue to absorb and reproduce these injustices–perhaps even accelerating them. Human values evolve over time, and AI must evolve in step. Artificial intelligence lacks a conscience of its own; it reflects only the information people provide. If that information is flawed, biased, or racist, AI will keep repeating history’s darkest chapters.
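In practice, the “Ethicality” strand of such frameworks is often checked with fairness audits. The sketch below (an illustration, not something prescribed by RICE itself) computes one common measure, the demographic-parity gap: the difference in positive-prediction rates between groups.

```python
# Illustrative audit only: the predictions and groups below are hypothetical.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between groups (0 = perfectly even)."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Hypothetical predictions from a hiring model for ten applicants in two groups
y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))   # 0.4: group 0 is favoured
```

A small gap on one such metric, however, says nothing about the deeper distortions in the data, which is precisely the limitation argued above.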

[Debanjan Borthakur is a Doctoral Candidate (U. of Toronto) and a researcher in psychology and neuroscience, with a master’s degree (McMaster U.). His research explores the intersection of social and health psychology. Debanjan is a passionate activist for psychological well-being in work environments and a member of the academic parity movement. He is partnered with The Canadian Institute of Workplace Harassment and Violence.]
(Originally published at americandiversityreport.com)


Frontier
Vol 58, No. 19, Nov 2 - 8, 2025